Lectures 13-14: Concentration for Sums of Random Matrices

Author

  • James R. Lee
Abstract

The trace of A is Tr(A) = ∑ᵢ₌₁ᵈ Aᵢᵢ = ∑ᵢ₌₁ᵈ λᵢ(A). The trace norm of A is ‖A‖∗ = ∑ᵢ₌₁ᵈ |λᵢ(A)|. A symmetric matrix is positive semidefinite (PSD) if all its eigenvalues are nonnegative. Note that for a PSD matrix A, we have Tr(A) = ‖A‖∗. We also recall the matrix exponential eᴬ = ∑ₖ₌₀^∞ Aᵏ/k!, which is well-defined for every real symmetric A and is itself a real symmetric matrix. If A is symmetric, then eᴬ is always PSD, as the next argument shows. Every real symmetric matrix can be diagonalized, writing A = Uᵀ D U, where U is an orthogonal matrix, i.e. UUᵀ = UᵀU = I, and D is diagonal. One can easily check that Aᵏ = Uᵀ Dᵏ U for any k ∈ ℕ, thus Aᵏ and A are simultaneously diagonalizable. It follows that A and eᴬ are simultaneously diagonalizable. In particular, we have λᵢ(eᴬ) = e^{λᵢ(A)}; since the exponential is positive, eᴬ is PSD. Finally, note that for symmetric matrices A and B, we have Tr(AB) ≤ ‖A‖ · ‖B‖∗. To see this, let {uᵢ} be an orthonormal basis of eigenvectors of B with Buᵢ = λᵢ(B)uᵢ. Then Tr(AB) = ∑ᵢ uᵢᵀABuᵢ = ∑ᵢ λᵢ(B) uᵢᵀAuᵢ ≤ ∑ᵢ |λᵢ(B)| · ‖A‖ = ‖A‖ · ‖B‖∗.
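The identities above are easy to check numerically. The following sketch (assuming NumPy; the random matrices and the dimension d = 5 are arbitrary illustration choices) verifies that λᵢ(eᴬ) = e^{λᵢ(A)} via the spectral decomposition, and that Tr(AB) ≤ ‖A‖ · ‖B‖∗:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric matrices A and B (illustrative choice of dimension).
d = 5
M = rng.standard_normal((d, d))
A = (M + M.T) / 2
N = rng.standard_normal((d, d))
B = (N + N.T) / 2

# Spectral decomposition: eigh returns ascending eigenvalues and an
# orthogonal U whose columns are eigenvectors, so A = U @ diag(evals) @ U.T.
evals, U = np.linalg.eigh(A)

# Matrix exponential computed on the diagonal, as in the argument above.
expA = U @ np.diag(np.exp(evals)) @ U.T

# lambda_i(e^A) = e^{lambda_i(A)}; both sides are in ascending order since
# exp is monotone, so they match entrywise.  Hence e^A is PSD.
assert np.allclose(np.linalg.eigvalsh(expA), np.exp(evals))

# Tr(AB) <= ||A|| * ||B||_*  (operator norm times trace norm).
op_norm_A = np.max(np.abs(np.linalg.eigvalsh(A)))
trace_norm_B = np.sum(np.abs(np.linalg.eigvalsh(B)))
assert np.trace(A @ B) <= op_norm_A * trace_norm_B
print("checks passed")
```

Diagonalizing once and exponentiating the eigenvalues is exactly the simultaneous-diagonalization argument from the text; for non-symmetric matrices one would instead need a general matrix exponential routine.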


Similar resources

Lecture 2: Matrix Chernoff bounds

The purpose of my second and third lectures is to discuss spectral sparsifiers, which are the second key ingredient in most of the fast Laplacian solvers. In this lecture we will discuss concentration bounds for sums of random matrices, which are an important technical tool underlying the simplest sparsifier construction.


User-friendly Tail Bounds for Matrix Martingales

This report presents probability inequalities for sums of adapted sequences of random, self-adjoint matrices. The results frame simple, easily verifiable hypotheses on the summands, and they yield strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. The methods also specialize to sums of independent random matrices. 1. Main Results This technical report is...


Introduction to the non-asymptotic analysis of random matrices

2 Preliminaries: 2.1 Matrices and their singular values; 2.2 Nets; 2.3 Sub-gaussian random variables; 2.4 Sub-exponential random variables; 2.5 Isotropic random vectors ...


Concentration of Haar measures, with an application to random matrices

We present a novel approach to measure concentration that works through a deeper investigation of the semigroup method. In particular, we show how couplings and rates of convergence of Markov chains can be used to obtain concentration bounds. As an application, we obtain a measure concentration result for random unitary matrices and other kinds of Haar-distributed random variables, which allows...


The Almost Sure Convergence for Weighted Sums of Linear Negatively Dependent Random Variables

In this paper, we generalize a theorem of Shao [12] by assuming that is a sequence of linear negatively dependent random variables. Also, we extend some theorems of Chao [6] and Thrum [14]. It is shown by an elementary method that for linear negatively dependent identically distributed random variables with finite -th absolute moment the weighted sums converge to zero as where and is an array of...



Publication date: 2015